14 research outputs found

    Fuzzy Least Squares Twin Support Vector Machines

    Least Squares Twin Support Vector Machine (LST-SVM) has been shown to be an efficient and fast algorithm for binary classification. It combines the operating principles of Least Squares SVM (LS-SVM) and Twin SVM (T-SVM): it constructs two non-parallel hyperplanes (as in T-SVM) by solving two systems of linear equations (as in LS-SVM). Despite its efficiency, LST-SVM is still unable to cope with two features of real-world problems. First, in many real-world applications, labels of samples are not deterministic; they come naturally with associated membership degrees. Second, samples in real-world applications may not be equally important, and their importance degrees affect the classification. In this paper, we propose Fuzzy LST-SVM (FLST-SVM) to deal with these two characteristics of real-world data. Two models are introduced for FLST-SVM: the first builds crisp hyperplanes using training samples and their corresponding membership degrees; the second constructs fuzzy hyperplanes using training samples and their membership degrees. Numerical evaluation of the proposed method on synthetic and real datasets demonstrates significant improvement in the classification accuracy of FLST-SVM over well-known existing versions of SVM.
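
    The core of LST-SVM, and of the first (crisp) FLST-SVM model, is solving two small regularized linear systems, one per hyperplane. The minimal sketch below shows how per-sample membership degrees could enter those systems as weights; the weighting scheme and parameter values are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def flstsvm_fit(A, B, s_A, s_B, c1=1.0, c2=1.0):
        """A, B: samples of the two classes; s_A, s_B: membership degrees in (0, 1]."""
        H = np.hstack([A, np.ones((A.shape[0], 1))])  # augmented matrix [A e]
        G = np.hstack([B, np.ones((B.shape[0], 1))])  # augmented matrix [B e]
        SA, SB = np.diag(s_A), np.diag(s_B)
        # Hyperplane 1: stay close to (weighted) class A, unit distance from class B.
        z1 = -np.linalg.solve(G.T @ SB @ G + (1.0 / c1) * H.T @ SA @ H,
                              G.T @ SB @ np.ones(B.shape[0]))
        # Hyperplane 2: the symmetric problem with the class roles swapped.
        z2 = np.linalg.solve(H.T @ SA @ H + (1.0 / c2) * G.T @ SB @ G,
                             H.T @ SA @ np.ones(A.shape[0]))
        return z1, z2  # each stacks (w, b)

    def flstsvm_predict(X, z1, z2):
        Xe = np.hstack([X, np.ones((X.shape[0], 1))])
        d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])  # distance to hyperplane 1
        d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])  # distance to hyperplane 2
        return np.where(d1 <= d2, 1, -1)  # label of the nearer hyperplane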

    Modeling Risky Choices in Unknown Environments

    Decision-theoretic models explain human behavior in choice problems involving uncertainty in terms of individual tendencies such as risk aversion. However, many classical models of risk require knowing the distribution of possible outcomes (rewards) for all options, limiting their applicability outside of controlled experiments. We study the task of learning such models in contexts where the modeler does not know the distributions but can only observe the choices and their outcomes for a user familiar with the decision problems, for example a skilled player of a digital game. We propose a framework combining two separate components: one for modeling the unknown decision-making environment and another for the risk behavior. By using environment models capable of learning distributions, we are able to infer classical models of decision-making under risk from observations of the user's choices and outcomes alone, and we also demonstrate alternative models for predictive purposes. We validate the approach on artificial data and demonstrate a practical use case in modeling risk attitudes of professional esports teams. Peer reviewed.
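
    A minimal sketch of the two-component idea, assuming an environment model that supplies outcome samples per option and an exponential-utility risk model fitted to the observed choices by maximum likelihood; the softmax choice rule, the utility form, and all parameter names are illustrative assumptions, not the paper's exact models.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def utility(r, a):
        # Exponential utility: a > 0 is risk-averse, a < 0 risk-seeking, 0 neutral.
        return r if abs(a) < 1e-8 else (1.0 - np.exp(-a * r)) / a

    def neg_log_lik(a, outcome_samples, choices, beta=5.0):
        # outcome_samples: per trial, an (n_options, n_samples) array drawn from
        # the learned environment model; choices: index chosen on each trial.
        nll = 0.0
        for samples, chosen in zip(outcome_samples, choices):
            eu = utility(samples, a).mean(axis=1)  # expected utility per option
            logits = beta * eu                     # softmax choice rule
            m = logits.max()
            nll -= logits[chosen] - (m + np.log(np.exp(logits - m).sum()))
        return nll

    # Recover the risk attitude from choices and outcomes alone:
    # res = minimize_scalar(neg_log_lik, bounds=(-3.0, 3.0), method="bounded",
    #                       args=(outcome_samples, choices))
    # a_hat = res.x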

    Improving genomics-based predictions for precision medicine through active elicitation of expert knowledge

    Motivation: Precision medicine requires the ability to predict the efficacies of different treatments for a given individual using high-dimensional genomic measurements. However, identifying predictive features remains a challenge when the sample size is small. Incorporating expert knowledge offers a promising way to improve predictions, but collecting such knowledge is laborious when the number of candidate features is very large. Results: We introduce a probabilistic framework to incorporate expert feedback about the impact of genomic measurements on the outcome of interest, and present a novel approach, based on Bayesian experimental design, to collect the feedback efficiently. The new approach outperformed other recent alternatives in two medical applications: prediction of metabolic traits and prediction of the sensitivity of cancer cells to different drugs, both using genomic features as predictors. Furthermore, the intelligent approach to collecting feedback reduced the workload of the expert to approximately 11%, compared to a baseline approach. Peer reviewed.
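
    The loop below is a minimal sketch of the elicitation idea: a Bayesian linear model whose per-feature prior scale is updated by expert answers of "relevant" or "not relevant", querying next the feature whose weight the posterior is most uncertain about. This uncertainty-sampling rule is a simplified stand-in for the paper's full Bayesian experimental design, and the prior scales are assumptions.

    import numpy as np

    def posterior(X, y, tau2, sigma2=1.0):
        """Gaussian posterior over weights, with prior w_j ~ N(0, tau2[j])."""
        P = X.T @ X / sigma2 + np.diag(1.0 / tau2)  # posterior precision
        cov = np.linalg.inv(P)
        mean = cov @ X.T @ y / sigma2
        return mean, cov

    def elicit(X, y, expert_says_relevant, n_queries=10,
               tau_rel=1.0, tau_irr=1e-3):
        tau2 = np.full(X.shape[1], 0.1)  # neutral prior scale before feedback
        asked = set()
        for _ in range(min(n_queries, X.shape[1])):
            _, cov = posterior(X, y, tau2)
            # Query the not-yet-asked feature with the most uncertain weight.
            j = max((j for j in range(X.shape[1]) if j not in asked),
                    key=lambda j: cov[j, j])
            asked.add(j)
            tau2[j] = tau_rel if expert_says_relevant(j) else tau_irr
        return posterior(X, y, tau2)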

    Simulated annealing least squares twin support vector machine (SA-LSTSVM) for pattern classification

    Least squares twin support vector machine (LSTSVM) is a relatively new version of the support vector machine (SVM) based on non-parallel twin hyperplanes. Although LSTSVM is an extremely efficient and fast algorithm for binary classification, its parameters depend on the nature of the problem. Problem-dependent parameters make tuning the algorithm to its best parameter values very difficult, which affects its accuracy. Simulated annealing (SA) is a random search technique proposed to find the global minimum of a cost function. It works by emulating the process in which a metal is slowly cooled so that its structure finally “freezes”; this freezing point corresponds to a minimum-energy configuration. The goal of this paper is to improve the accuracy of the LSTSVM algorithm by hybridizing it with simulated annealing. To the best of our knowledge, this is the first time such an improvement of LSTSVM has been proposed. Experimental results on several benchmark datasets demonstrate that the accuracy of the proposed algorithm is very promising compared to other classification methods in the literature. In addition, a computational time analysis showed the practicality of the proposed algorithm, whose computational time falls between those of LSTSVM and SVM.
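
    A minimal sketch of the hybridization, assuming a cross-validated LSTSVM error function cv_error(c1, c2) that is not shown here: simulated annealing perturbs the penalty parameters on a log scale and accepts worse candidates with a probability that shrinks as the temperature cools.

    import math
    import random

    def anneal(cv_error, c0=(1.0, 1.0), T0=1.0, cooling=0.95, steps=200):
        current, best = c0, c0
        e_cur = e_best = cv_error(*c0)
        T = T0
        for _ in range(steps):
            # Propose a neighbour by perturbing the parameters on a log scale.
            cand = tuple(c * math.exp(random.gauss(0.0, 0.3)) for c in current)
            e_cand = cv_error(*cand)
            # Accept downhill moves always, uphill moves with prob exp(-delta/T).
            if e_cand < e_cur or random.random() < math.exp(-(e_cand - e_cur) / T):
                current, e_cur = cand, e_cand
                if e_cur < e_best:
                    best, e_best = current, e_cur
            T *= cooling  # cool the temperature
        return best, e_best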

    Methods for probabilistic modeling of knowledge elicitation for improving machine learning predictions

    Many applications of supervised machine learning involve training data with a large number of features and a small sample size. Constructing models with reliable predictive performance in such applications is challenging. To alleviate these challenges, either more samples are required, which can be very difficult or even impossible to obtain in some applications, or additional sources of information are needed to regularize the models. One such additional source of information is the domain expert; however, extracting knowledge from a human expert can itself be difficult, and requires computer systems that experts can interact with effectively and effortlessly. This thesis proposes novel knowledge elicitation approaches to improve the predictive performance of statistical models. The first contribution of this thesis is to develop methods that incorporate different types of knowledge about features, elicited from a domain expert, into the construction of the machine learning model. Several solutions are proposed for knowledge elicitation, including interactive visualization of the effect of feedback on features, and active learning. Experiments demonstrate that the proposed methods improve the predictive performance of an underlying model through limited interaction with the user. The second contribution of the thesis is a new approach to the interpretability of Bayesian predictive models, facilitating the interaction of human users with Bayesian black-box predictive models. The proposed approach separates model specification from model interpretation via a two-stage decision-theoretic approach: first construct a highly predictive model without compromising accuracy, and then optimize its interpretability. Experiments demonstrate that the proposed method constructs models that are more accurate, and yet more interpretable, than those obtained by the alternative practice of incorporating interpretability constraints into the model specification via the prior distribution.

    Human-in-the-loop Active Covariance Learning for Improving Prediction in Small Data Sets

    Learning predictive models from small, high-dimensional data sets is a key problem in high-dimensional statistics. Expert knowledge elicitation can help, and a strong line of work focuses on directly eliciting informative prior distributions for parameters. This either requires considerable statistical expertise or is laborious, as the emphasis has been on accuracy and not on efficiency of the process. Another line of work queries about the importance of features one at a time, assuming them to be independent and hence missing covariance information. In contrast, we propose eliciting expert knowledge about pairwise feature similarities to borrow statistical strength in the predictions, and using sequential decision-making techniques to minimize the effort of the expert. Empirical results demonstrate improvement in predictive performance on both simulated and real data in high-dimensional linear regression tasks, where we learn the covariance structure with a Gaussian process based on sequential elicitation.
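
    The sketch below illustrates the elicitation idea in its simplest form: each "similar" answer raises the prior covariance between two feature weights, so the regression borrows strength across similar features. The random query order is a placeholder for the paper's sequential decision-making criterion, and the fixed correlation rho and the eigenvalue clipping are assumptions standing in for the paper's Gaussian-process treatment.

    import numpy as np

    def fit_with_prior_cov(X, y, K, sigma2=1.0):
        # Posterior mean of regression weights under the prior w ~ N(0, K).
        P = X.T @ X / sigma2 + np.linalg.inv(K)
        return np.linalg.solve(P, X.T @ y / sigma2)

    def elicit_similarities(X, y, expert_similar, n_queries=20, rho=0.8):
        d = X.shape[1]
        K = np.eye(d)
        pairs = [(j, k) for j in range(d) for k in range(j + 1, d)]
        np.random.shuffle(pairs)        # random query order: a placeholder for
        for j, k in pairs[:n_queries]:  # the paper's sequential design criterion
            if expert_similar(j, k):    # "are features j and k similar?"
                K[j, k] = K[k, j] = rho  # correlate the two weights a priori
        # Clip eigenvalues so the elicited covariance stays positive definite.
        vals, vecs = np.linalg.eigh(K)
        K = (vecs * np.clip(vals, 1e-6, None)) @ vecs.T
        return fit_with_prior_cov(X, y, K)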

    A decision-theoretic approach for model interpretability in Bayesian framework

    A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users’ preferences, not the data-generation mechanism; it is more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility, which explicates the trade-off between explanation fidelity and interpretability in the Bayesian framework. The method consists of two steps. First, a reference model, possibly a black-box Bayesian predictive model that does not compromise accuracy, is fitted to the training data. Second, a proxy model from an interpretable model family that best mimics the predictive behaviour of the reference model is found by optimizing the interpretability utility function. The approach is model-agnostic: neither the interpretable model nor the reference model is restricted to a certain class of models, and the optimization problem can be solved using standard tools. Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression models as reference models, we show that for the same level of interpretability, our approach generates more accurate models than the alternative of restricting the prior. We also propose a systematic way to measure the stability of interpretable models constructed by different interpretability approaches, and show that our approach generates more stable models. Peer reviewed.
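
    A minimal sketch of the two-step procedure, with scikit-learn models standing in for the Bayesian reference and proxy families of the paper: fit a flexible reference model, then search over decision trees of increasing depth for the one maximizing a fidelity-minus-complexity utility. The complexity penalty lam is an illustrative assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor

    def interpretable_proxy(X, y, lam=0.01, max_depth=8):
        # Step 1: a flexible reference model, fitted without regard to interpretability.
        reference = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        target = reference.predict(X)  # the proxy mimics the reference, not raw y
        best, best_u = None, -np.inf
        # Step 2: pick the proxy maximizing fidelity minus a complexity penalty.
        for depth in range(1, max_depth + 1):
            proxy = DecisionTreeRegressor(max_depth=depth).fit(X, target)
            fidelity = -np.mean((proxy.predict(X) - target) ** 2)
            utility = fidelity - lam * proxy.tree_.node_count
            if utility > best_u:
                best, best_u = proxy, utility
        return best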
